# Code Reasoning

## Qwen2.5-Coder-0.5B-Instruct-GPTQ-Int8

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the robust Qwen2.5, the series significantly improves code generation, reasoning, and repair capabilities by scaling training to 5.5 trillion tokens, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has emerged as the most advanced open-source large language model for code, matching the coding capabilities of GPT-4o. Additionally, Qwen2.5-Coder offers a more comprehensive foundation for real-world applications, such as code agents, enhancing coding abilities while maintaining strengths in mathematics and general proficiency. This checkpoint is the GPTQ 8-bit (Int8) quantized, instruction-tuned 0.5B variant.

Coding Assistant
45.5K

## Qwen2.5-Coder-0.5B-Instruct-AWQ

Qwen2.5-Coder represents the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the robust foundations of Qwen2.5, with a training corpus expanded to 5.5 trillion tokens that includes source code, text-code grounding, and synthetic data, Qwen2.5-Coder-32B has emerged as the leading open-source code LLM, matching the coding capabilities of GPT-4o. This checkpoint is the AWQ 4-bit quantized, instruction-tuned 0.5B-parameter version: a causal language model with a transformer architecture that has undergone both pre-training and post-training. A minimal loading sketch follows this entry.

Code Reasoning
45.3K
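
A minimal loading sketch for the AWQ checkpoint above, assuming the Hugging Face repo id matches the entry name and that the `autoawq` package is installed alongside `transformers` (GPTQ entries such as the one above load the same way, with a GPTQ backend installed instead); this is an illustrative pattern, not an official usage guide:

```python
# Hedged sketch: load an AWQ-quantized Qwen2.5-Coder instruct checkpoint and
# generate one completion through its chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-0.5B-Instruct-AWQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # place weights on available devices automatically
)

messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```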

## Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int4

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built upon the powerful Qwen2.5 foundation, the series has been trained on 5.5 trillion tokens spanning source code, text-code grounding, and synthetic data, making it one of the leading open-source code language model families today, with coding ability rivaling GPT-4o. Additionally, Qwen2.5-Coder offers comprehensive real-world application capabilities, such as code agents, enhancing coding proficiency while maintaining strengths in mathematical and general skills.

Code Reasoning
44.7K

## Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int8

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Based on the powerful Qwen2.5 architecture, the series was trained on 5.5 trillion tokens of source code, text-code grounding, synthetic data, and more, making it a leader among current open-source code language models. It not only enhances programming capabilities but also retains advantages in mathematics and general-purpose tasks.

Code Reasoning
45.0K

## Qwen2.5-Coder-1.5B-Instruct-GGUF

Qwen2.5-Coder is the latest series of the Qwen large language models, specifically designed for code generation, reasoning, and repair. Built on the powerful Qwen2.5 foundation, it has scaled training tokens to 5.5 trillion, incorporating source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B has emerged as the most advanced open-source large language model for code, matching the coding capabilities of GPT-4o. This model is the 1.5B-parameter instruction-tuned version packaged in GGUF format: a causal, transformer-based language model that has undergone both pre-training and post-training. A minimal local-inference sketch follows this entry.

Code Reasoning
50.0K
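
GGUF builds target local runtimes such as llama.cpp; below is a minimal sketch using the llama-cpp-python bindings, where the local filename and quantization variant are assumptions rather than part of this entry:

```python
# Hedged sketch: run a GGUF build of Qwen2.5-Coder-1.5B-Instruct locally with
# the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-1.5b-instruct-q4_k_m.gguf",  # assumed local file
    n_ctx=4096,       # context window for this session
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what the regex ^\\d{4}-\\d{2}-\\d{2}$ matches."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```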

## Qwen2.5-Coder-1.5B-Instruct-AWQ

Qwen2.5-Coder is the latest series of the Qwen large language models, designed for code generation, reasoning, and repair. Built on the powerful Qwen2.5, the series has been trained on 5.5 trillion tokens of source code, text-code grounding, and synthetic data, elevating its coding capabilities to the forefront of open-source code LLMs. It not only enhances coding abilities but also maintains strengths in mathematics and general capabilities.

Code Reasoning
44.4K

## Qwen2.5-Coder-3B-Instruct-GPTQ-Int8

Qwen2.5-Coder-3B-Instruct-GPTQ-Int8 is a large language model optimized for code generation, reasoning, and repair, part of the Qwen2.5-Coder series. Based on Qwen2.5, it has been trained on a dataset of 5.5 trillion tokens, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has emerged as the leading open-source large language model for code, matching the coding capabilities of GPT-4o. This model also provides a comprehensive foundation for real-world applications such as code assistance, augmenting coding capabilities while maintaining strengths in mathematics and general skills.

Code Reasoning
43.9K

## Qwen2.5-Coder-3B-Instruct-GGUF

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the powerful Qwen2.5, it has been trained on a dataset of 5.5 trillion tokens, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has emerged as the most advanced open-source code large language model, matching the coding capabilities of GPT-4o. In practical applications, the series provides a more comprehensive foundation for uses such as code agents, enhancing coding prowess while retaining advantages in math and general abilities.

Code Reasoning
46.4K

## Qwen2.5-Coder-14B-Instruct-AWQ

Qwen2.5-Coder is a series of large language models specifically designed for coding, ranging from 0.5 to 32 billion parameters to meet various developer needs. The series shows significant improvements in code generation, reasoning, and repair, leveraging the powerful Qwen2.5 foundation trained on 5.5 trillion tokens, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B is currently the most advanced open-source large language model for code generation, matching the coding capabilities of GPT-4o. Additionally, this model supports long contexts of up to 128K tokens and employs AWQ 4-bit quantization to improve efficiency; a long-context configuration sketch follows this entry.

Code Reasoning
51.9K
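
The 128K-token figure generally refers to an extended window enabled on top of the 32K native context via YaRN rope scaling, following the pattern described in Qwen's model cards; the sketch below illustrates that pattern, with the repo id and scaling values as assumptions to verify against the checkpoint's own documentation:

```python
# Hedged sketch: enable ~128K context for a Qwen2.5-Coder checkpoint by adding
# a YaRN rope-scaling section to the model config before loading.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/Qwen2.5-Coder-14B-Instruct-AWQ"  # assumed repo id

config = AutoConfig.from_pretrained(model_id)
# 32,768 native positions x factor 4.0 = 131,072 (~128K) effective context.
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```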

## Qwen2.5-Coder-32B-Instruct-GGUF

Qwen2.5-Coder is a series designed specifically for code generation, offering substantial gains in that area across a range of parameter sizes, with quantized GGUF builds for local deployment. It is freely available and improves development efficiency and code quality for developers.

Code Reasoning
50.0K

## Qwen2.5-Coder-0.5B-Instruct

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, reasoning, and repair. Built on the powerful Qwen2.5 with an extended training dataset of 5.5 trillion tokens that includes source code, text-code grounding, and synthetic data, Qwen2.5-Coder-32B has become the leading open-source code LLM, matching GPT-4o in coding abilities. This model not only enhances coding capabilities but also maintains strengths in mathematics and general abilities, providing a comprehensive foundation for real-world applications like code assistance.

Coding Assistant
44.7K

## Qwen2.5-Coder-0.5B

Qwen2.5-Coder is the latest series of the Qwen large language models, focusing on code generation, code reasoning, and code repair. Built upon the powerful Qwen2.5 model, the series significantly enhances coding capabilities by scaling training to 5.5 trillion tokens, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has become the current state-of-the-art open-source large language model for code, comparable in coding ability to GPT-4o. Moreover, Qwen2.5-Coder provides a more comprehensive foundation for practical applications like code agents, enhancing coding abilities while maintaining strengths in mathematics and general capabilities.

Coding Assistant
47.5K

## Qwen2.5-Coder-1.5B

Qwen2.5-Coder-1.5B is a large language model in the Qwen2.5-Coder series, focusing on code generation, reasoning, and repair. Built upon the robust Qwen2.5 architecture, the series has expanded its training tokens to 5.5 trillion, incorporating source code, text-code grounding, synthetic data, and more, placing it at the forefront of open-source code LLMs, with its flagship rivaling GPT-4o's coding capabilities. Moreover, Qwen2.5-Coder-1.5B retains solid mathematical and general capabilities, providing a more comprehensive foundation for practical applications such as code agents.

Coding Assistant
49.7K

## Qwen2.5-Coder-1.5B-Instruct

Qwen2.5-Coder is the latest series in the Qwen large language model family, focusing on code generation, code reasoning, and code repair. Leveraging the powerful capabilities of Qwen2.5, the series was trained on 5.5 trillion tokens of source code, text-code grounding, synthetic data, and more, making it a leader among open-source code language models, comparable in coding ability to GPT-4o. It not only enhances coding capability but also retains strengths in mathematics and general skills, providing a robust foundation for practical applications such as code agents.

Coding Assistant
48.6K

## Qwen2.5-Coder-3B

Qwen2.5-Coder-3B is a large language model within the Qwen2.5-Coder series, emphasizing code generation, reasoning, and repair. Built on the robust Qwen2.5 architecture, the series significantly improves code generation, reasoning, and repair capabilities by increasing training tokens to 5.5 trillion, incorporating source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B has become the most advanced open-source code language model available, matching the coding capabilities of GPT-4o. Furthermore, Qwen2.5-Coder-3B provides a comprehensive foundation for real-world applications like code assistants, enhancing coding capabilities while maintaining strengths in mathematics and general comprehension.

Coding Assistant
63.8K

## Qwen2.5-Coder-3B-Instruct

Qwen2.5-Coder is the latest series of the Qwen large language models, focused on code generation, reasoning, and repair. Based on the powerful Qwen2.5, the series significantly enhances code generation, reasoning, and repair capabilities by increasing training tokens to 5.5 trillion, including source code, text-code grounding, synthetic data, and more. The Qwen2.5-Coder-3B model contains 3.09B parameters, 36 layers, 16 query attention heads and 2 key-value attention heads (grouped-query attention), with a context length of 32,768 tokens. The series stands out among open-source code LLMs, with its flagship matching the coding capabilities of GPT-4o, and this model provides developers with a powerful code assistance tool; a configuration-inspection sketch follows this entry.

Coding Assistant
46.1K
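
The architecture figures quoted above correspond to standard fields of the checkpoint's configuration and can be checked without downloading the weights; a small sketch (repo id assumed from the entry name):

```python
# Hedged sketch: inspect the architecture fields of Qwen2.5-Coder-3B-Instruct
# straight from its config, without fetching the model weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Coder-3B-Instruct")  # assumed repo id

print(config.num_hidden_layers)        # expected: 36 layers
print(config.num_attention_heads)      # expected: 16 query heads
print(config.num_key_value_heads)      # expected: 2 key/value heads (GQA)
print(config.max_position_embeddings)  # expected: 32,768-token context
```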

## Qwen2.5-Coder-7B

Qwen2.5-Coder-7B is a large language model based on Qwen2.5, focusing on code generation, reasoning, and repair. It has been trained on 5.5 trillion tokens, including source code, text-code grounding, synthetic data, and more, representing the latest advances in open-source code language models. The series' flagship matches GPT-4o in programming capability, while this model also retains advantages in mathematics and general skills and supports long contexts of up to 128K tokens.

Coding Assistant
45.3K

## Qwen2.5-Coder-7B-Instruct

Qwen2.5-Coder-7B-Instruct is a large language model specifically designed for code, part of the Qwen2.5-Coder series which includes six mainstream model sizes: 0.5, 1.5, 3, 7, 14, and 32 billion parameters to meet the diverse needs of developers. This model shows significant improvements in code generation, reasoning, and debugging, trained on an extensive dataset of 5.5 trillion tokens that includes source code, code-related textual data, and synthetic data. The Qwen2.5-Coder-32B represents the latest advancement in open-source code LLMs, matching the coding capabilities of GPT-4o. Moreover, it supports long context lengths of up to 128K tokens, providing a solid foundation for practical applications like code agents.

Coding Assistant
44.2K

## Qwen2.5-Coder Technical Report

The Qwen2.5-Coder series consists of code-specific models built on the Qwen2.5 architecture, including Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B. These models continue pre-training on a massive corpus of over 5.5 trillion tokens, showing impressive code generation capabilities while retaining generality thanks to meticulous data cleaning, scalable synthetic data generation, and balanced data mixing. Qwen2.5-Coder achieves state-of-the-art performance across more than ten benchmarks covering code generation, completion, reasoning, and repair, consistently outperforming even larger models. The release of this series not only pushes the boundaries of code-intelligence research but also, through its permissive licensing, encourages developers to adopt it in real-world applications.

Coding Assistant
61.8K

## Qwen2.5-Coder-14B

Qwen2.5-Coder-14B is a large language model in the code-focused Qwen series, which spans model sizes from 0.5 to 32 billion parameters to meet diverse developer needs. The series shows significant improvements in code generation, reasoning, and repair, built upon the powerful Qwen2.5 with training tokens expanded to 5.5 trillion, including source code, text-code grounding, and synthetic data. Qwen2.5-Coder-32B has become the leading open-source code LLM, matching the coding capabilities of GPT-4o. Additionally, this model provides a comprehensive foundation for real-world applications such as code agents, enhancing coding abilities while maintaining advantages in mathematics and general tasks, and it supports long contexts of up to 128K tokens.

Coding Assistant
48.3K

## Qwen2.5-Coder-14B-Instruct

Qwen2.5-Coder-14B-Instruct is a large language model in the Qwen2.5-Coder series, focusing on code generation, reasoning, and repair. Built upon the powerful Qwen2.5, this model is trained on 5.5 trillion tokens, including source code and synthetic data, making it a leading open-source code LLM. It not only enhances coding capabilities but also maintains strengths in mathematics and general abilities while supporting long contexts of up to 128K tokens.

Coding Assistant
46.1K

## Qwen2.5-Coder-32B

Qwen2.5-Coder-32B is a code generation model based on Qwen2.5, featuring 32 billion parameters, making it one of the largest open-source code language models available today. It shows significant improvements in code generation, reasoning, and repair, and can handle long contexts of up to 128K tokens, which suits practical applications such as code assistants. The model also maintains advantages in mathematical and general capabilities, serving as a powerful assistant for developers in everyday code development.

Coding Assistant
48.3K

## Qwen2.5-Coder-32B-Instruct

Qwen2.5-Coder represents a series of large language models designed specifically for code generation, featuring six mainstream model sizes with 0.5, 1.5, 3, 7, 14, and 32 billion parameters to meet diverse developers' needs. The series has made significant improvements in code generation, reasoning, and repair, built upon the robust Qwen2.5 with training scaled to 5.5 trillion tokens, including source code, text-code grounding, synthetic data, and more. Qwen2.5-Coder-32B is currently the most advanced open-source code generation large language model, rivaling the coding capabilities of GPT-4o. It not only enhances coding abilities but also retains advantages in mathematics and general understanding, supporting long contexts of up to 128K tokens.

Coding Assistant
46.1K